In this paper, we propose a novel architecture, the Enhanced Interactive Transformer (EIT), to address the issue of head degradation in self-attention mechanisms. Our approach replaces the traditional multi-head self-attention mechanism with the Enhanced Multi-Head Attention (EMHA) mechanism, which relaxes the one-to-one mapping constraint between queries and keys, allowing each query to attend to multiple keys. Furthermore, we introduce two interaction models, Inner-Subspace Interaction and Cross-Subspace Interaction, to fully utilize the many-to-many mapping capabilities of EMHA. Extensive experiments on a wide range of tasks (e.g., machine translation, abstractive summarization, grammar correction, language modelling, and automatic brain disease diagnosis) demonstrate its superiority, with a very modest increase in model size.
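To make the relaxed query-key mapping concrete, here is a minimal PyTorch sketch in which every query subspace attends to the key/value subspaces of all heads (many-to-many) rather than only its own (one-to-one). The abstract does not specify EMHA's exact formulation or the two interaction models, so the class name, projection layout, and output pooling below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class RelaxedMultiHeadAttention(nn.Module):
    """Toy sketch: each query subspace attends to the key/value
    subspaces of *all* heads (many-to-many), instead of only its
    own head (one-to-one) as in standard multi-head attention."""
    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        assert d_model % n_heads == 0
        self.h, self.d_k = n_heads, d_model // n_heads
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        self.out = nn.Linear(d_model * n_heads, d_model)  # pools all h*h head pairs

    def forward(self, x):
        B, T, _ = x.shape
        split = lambda t: t.view(B, T, self.h, self.d_k).transpose(1, 2)  # (B,h,T,dk)
        q, k, v = split(self.q_proj(x)), split(self.k_proj(x)), split(self.v_proj(x))
        # Cross all query heads with all key heads: scores are (B, hq, hk, T, T)
        scores = torch.einsum('bqtd,bksd->bqkts', q, k) / self.d_k ** 0.5
        attn = scores.softmax(dim=-1)
        ctx = torch.einsum('bqkts,bksd->bqktd', attn, v)    # (B, hq, hk, T, dk)
        ctx = ctx.permute(0, 3, 1, 2, 4).reshape(B, T, -1)  # concat all hq*hk contexts
        return self.out(ctx)
```

Standard multi-head attention is recovered by keeping only the diagonal head pairs (hq == hk).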
We present a method for introducing a text encoder into pre-trained end-to-end speech translation systems. It enhances the model's ability to adapt one modality (i.e., source-language speech) to another (i.e., source-language text). The speech translation model can thus learn from both unlabeled and labeled data, which is especially valuable when source-language text data is abundant. Beyond this, we present a denoising method to build a robust text encoder that can handle both clean and noisy text data. Our system sets new state-of-the-art results on the MuST-C En-De, En-Fr, and LibriSpeech En-Fr tasks.
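The following sketch illustrates the general idea of sharing one semantic encoder between speech features and (optionally noised) source-language text, so the text branch can be trained on abundant text data. The actual architecture, feature dimensions, and noising scheme are not given in the abstract; everything below is an assumption.

```python
import torch
import torch.nn as nn

class SharedSemanticEncoder(nn.Module):
    """Hypothetical shared encoder: consumes either speech features
    or source-language text so both modalities land in one space."""
    def __init__(self, d_model=256, vocab_size=10000, n_layers=4):
        super().__init__()
        self.text_embed = nn.Embedding(vocab_size, d_model)
        self.speech_proj = nn.Linear(80, d_model)  # e.g., 80-dim fbank frames
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, speech=None, tokens=None):
        x = self.speech_proj(speech) if speech is not None else self.text_embed(tokens)
        return self.encoder(x)

def add_text_noise(tokens, mask_id=3, p=0.15):
    """Denoising-style corruption (assumed: simple random token masking)."""
    noise = torch.rand(tokens.shape) < p
    return tokens.masked_fill(noise, mask_id)
```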
Multi-scale feature hierarchies have witnessed success in the computer vision community, which has motivated researchers to design multi-scale Transformers for natural language processing as well, mostly on top of the self-attention mechanism, for example by restricting the receptive field across heads or by extracting local fine-grained features via convolution. However, most existing works model local features directly but ignore word-boundary information, which leads to redundant and ambiguous attention distributions that lack interpretability. In this work, we define the scales in terms of different linguistic units, including sub-words, words, and phrases. We build a multi-scale Transformer model by establishing relationships between these scales based on word-boundary information and phrase-level prior knowledge. The proposed \textbf{U}niversal \textbf{M}ulti-\textbf{S}cale \textbf{T}ransformer, namely UMST, is evaluated on two sequence generation tasks. Notably, it yields consistent performance gains over strong baselines on several test sets without sacrificing efficiency.
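A minimal sketch of one way word-boundary information can enter attention: a sub-word-to-word mapping induces a word-scale grouping mask that can bias or restrict attention. The function and input format are assumptions for illustration; UMST's actual scale-relationship modeling may differ.

```python
import torch

def word_scale_mask(word_ids):
    """Sub-words i and j may interact at the word scale iff they belong
    to the same word. `word_ids[i]` is the (assumed) index of the word
    that sub-word i belongs to."""
    return word_ids.unsqueeze(0) == word_ids.unsqueeze(1)  # (T, T) bool

# e.g., "un ##believ ##able story" -> word ids [0, 0, 0, 1]
mask = word_scale_mask(torch.tensor([0, 0, 0, 1]))
print(mask)
```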
Convolution on 3D point clouds has been widely studied, yet it remains far from perfect in geometric deep learning. The conventional wisdom of convolution assumes feature correspondences among 3D points, an intrinsic limitation that results in poor distinctive feature learning. In this paper, we propose Adaptive Graph Convolution (AGConv) for a wide range of applications in point cloud analysis. AGConv generates adaptive kernels for points according to their dynamically learned features. Compared with solutions that use fixed/isotropic kernels, AGConv improves the flexibility of point cloud convolutions, effectively and precisely capturing the diverse relations between points from different semantic parts. Unlike popular attention-weight schemes, AGConv implements adaptiveness inside the convolution operation instead of simply assigning different weights to neighboring points. Extensive evaluations clearly show that our method outperforms state-of-the-art methods on point cloud classification and segmentation across various benchmark datasets. Meanwhile, AGConv can be flexibly adopted by more point cloud analysis approaches to improve their performance. To validate its flexibility and effectiveness, we explore AGConv-based paradigms for completion, denoising, upsampling, registration, and circle extraction, which are comparable or even superior to their competitors. Our code is available at https://github.com/hrzhou2/adaptconv-master.
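A minimal sketch of the adaptive-kernel idea behind AGConv, assuming a simple edge-feature parameterization: the kernel applied to each neighbor is generated from learned features rather than fixed. The layer sizes and pooling are illustrative, not the released implementation (see the linked repository for that).

```python
import torch
import torch.nn as nn

class AdaptiveGraphConv(nn.Module):
    """Per-neighbor kernels are generated from learned features
    instead of being fixed weights shared across all points."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.in_dim, self.out_dim = in_dim, out_dim
        # maps an edge feature to a flattened (out_dim x in_dim) kernel
        self.kernel_gen = nn.Linear(2 * in_dim, out_dim * in_dim)

    def forward(self, x, neighbor_idx):
        # x: (N, in_dim) point features; neighbor_idx: (N, k) neighbor indices
        N, k = neighbor_idx.shape
        center = x.unsqueeze(1).expand(N, k, self.in_dim)
        neigh = x[neighbor_idx]                             # (N, k, in_dim)
        edge = torch.cat([center, neigh - center], dim=-1)  # (N, k, 2*in_dim)
        kernels = self.kernel_gen(edge).view(N, k, self.out_dim, self.in_dim)
        msgs = torch.einsum('nkoi,nki->nko', kernels, neigh)  # apply adaptive kernels
        return msgs.max(dim=1).values                         # (N, out_dim) max-pooled
```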
Existing federated classification algorithms typically assume the local annotations at every client cover the same set of classes. In this paper, we aim to lift such an assumption and focus on a more general yet practical non-IID setting where every client can work on non-identical and even disjoint sets of classes (i.e., client-exclusive classes), and the clients have a common goal which is to build a global classification model to identify the union of these classes. Such heterogeneity in client class sets poses a new challenge: how to ensure different clients are operating in the same latent space so as to avoid the drift after aggregation? We observe that the classes can be described in natural languages (i.e., class names) and these names are typically safe to share with all parties. Thus, we formulate the classification problem as a matching process between data representations and class representations and break the classification model into a data encoder and a label encoder. We leverage the natural-language class names as the common ground to anchor the class representations in the label encoder. In each iteration, the label encoder updates the class representations and regulates the data representations through matching. We further use the updated class representations at each round to annotate data samples for locally-unaware classes according to similarity and distill knowledge to local models. Extensive experiments on four real-world datasets show that the proposed method can outperform various classical and state-of-the-art federated learning methods designed for learning with non-IID data.
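A compact sketch of the matching formulation described above: classification becomes a similarity between data embeddings and class-name embeddings from a label encoder, so clients with disjoint class sets still share one latent space. The encoders, temperature, and normalization here are stand-in assumptions.

```python
import torch.nn as nn
import torch.nn.functional as F

class MatchingClassifier(nn.Module):
    """Classification as matching: logits are (scaled) cosine similarities
    between data embeddings and class-name embeddings."""
    def __init__(self, data_encoder, label_encoder, tau=0.1):
        super().__init__()
        self.data_encoder = data_encoder    # any module: inputs -> (B, d)
        self.label_encoder = label_encoder  # any module: class-name tokens -> (C, d)
        self.tau = tau

    def forward(self, x, class_name_tokens):
        z = F.normalize(self.data_encoder(x), dim=-1)
        c = F.normalize(self.label_encoder(class_name_tokens), dim=-1)
        return z @ c.t() / self.tau  # (B, C) logits over ALL classes

# On each client, logits are computed against the full shared class-name
# list, but the local loss only covers the classes that client annotates.
```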
The essential task of urban planning is to generate the optimal land-use configuration of a target area. However, traditional urban planning is time-consuming and labor-intensive. Deep generative learning gives us hope that we can automate this planning process and come up with ideal urban plans. While remarkable achievements have been made, existing approaches exhibit limitations in their lack of awareness of: 1) the hierarchical dependencies between functional zones and spatial grids; 2) the peer dependencies among functional zones; and 3) human regulations that ensure the usability of generated configurations. To address these limitations, we develop a novel human-instructed deep hierarchical generative model. We rethink the urban planning generative task from a unique functionality perspective, where we summarize planning requirements into different functionality projections for better urban plan generation. To this end, we develop a three-stage generation process from a target area to zones to grids. The first stage labels the grids of a target area with latent functionalities to discover functional zones. The second stage perceives the planning requirements to form urban functionality projections. We propose a novel module, the functionalizer, to project the embedding of human instructions and geospatial contexts onto the zone-level plan and obtain such projections; each projection includes the information of land-use portfolios and the structural dependencies across spatial grids with respect to a specific urban function. The third stage leverages multi-attentions to model the zone-zone peer dependencies of the functionality projections and generate grid-level land-use configurations. Finally, we present extensive experiments to demonstrate the effectiveness of our framework.
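As a rough illustration of the described functionalizer, the sketch below fuses an instruction embedding with geospatial context and conditions zone-level representations on it via attention. The fusion scheme, dimensions, and attention layout are assumptions; the abstract describes the module only at a high level.

```python
import torch
import torch.nn as nn

class Functionalizer(nn.Module):
    """Fuse human-instruction and geospatial-context embeddings into a
    conditioning vector, then let each zone attend to it to produce
    zone-level functionality projections."""
    def __init__(self, d_instr, d_geo, d_zone, d_proj):
        super().__init__()
        self.fuse = nn.Linear(d_instr + d_geo, d_proj)
        self.zone_in = nn.Linear(d_zone, d_proj)
        self.attn = nn.MultiheadAttention(d_proj, num_heads=4, batch_first=True)

    def forward(self, instr_emb, geo_emb, zone_embs):
        # instr_emb: (B, d_instr), geo_emb: (B, d_geo), zone_embs: (B, Z, d_zone)
        cond = self.fuse(torch.cat([instr_emb, geo_emb], dim=-1)).unsqueeze(1)
        zones = self.zone_in(zone_embs)
        proj, _ = self.attn(zones, cond, cond)  # condition zones on instructions
        return proj                             # (B, Z, d_proj)
```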
Document images are a ubiquitous source of data in which text is organized in a complex hierarchical structure, ranging from fine granularity (e.g., words), to medium granularity (e.g., regions such as paragraphs or figures), to coarse granularity (e.g., the whole page). The spatial hierarchical relationships between content at different levels of granularity are crucial for document image understanding tasks. Existing methods learn features at either the word level or the region level, but fail to consider both simultaneously. Word-level models are restricted by the fact that they originate from pure-text language models, which encode only word-level context. In contrast, region-level models attempt to encode regions corresponding to paragraphs or text blocks into a single embedding, but they perform worse with additional word-level features. To deal with these issues, we propose MGDoc, a new multi-modal, multi-granular pre-training framework that encodes page-level, region-level, and word-level information at the same time. MGDoc uses a unified text-visual encoder to obtain multi-modal features across the different granularities, which makes it possible to project the multi-granular features into the same hyperspace. To model the region-word correlation, we design a cross-granular attention mechanism and specific pre-training tasks that reinforce the model's learning of the hierarchy between regions and words. Experiments demonstrate that our proposed model learns better features that perform well across granularities and lead to improvements on downstream tasks.
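A minimal sketch of a cross-granular attention block in the spirit described above: word features attend to region features from the shared encoder (the symmetric direction can be stacked analogously). Dimensions and head counts are illustrative assumptions, not MGDoc's configuration.

```python
import torch.nn as nn

class CrossGranularAttention(nn.Module):
    """Word features attend to region features so word-level context is
    enriched with region-level structure; both come from a shared
    multi-modal encoder and live in the same space."""
    def __init__(self, d_model=256, n_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, word_feats, region_feats):
        # word_feats: (B, W, d); region_feats: (B, R, d)
        out, _ = self.attn(word_feats, region_feats, region_feats)
        return self.norm(word_feats + out)
```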
The spread of misinformation is a prominent problem in today's society, and many researchers in academia and industry are trying to combat it. Given the vast amount of misinformation created every day, it is unrealistic to leave this task to human fact-checkers alone. Data scientists and researchers have been working on automated misinformation detection for years, and it remains a challenging problem today. The goal of our research is to add a new level to automated misinformation detection: classifying segments of text by persuasive writing technique, in order to produce interpretable reasoning for why an article can be marked as misinformation. To accomplish this, we present a novel annotation scheme covering many common persuasive writing tactics, along with a dataset annotated accordingly by human annotators. For this task, we use a RoBERTa model for text classification, due to its high performance in NLP. We develop several language-model-based baselines and present the results of our persuasive-strategy label predictions, as well as the improvements these intermediate labels bring in detecting misinformation and producing interpretable results.
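A minimal sketch of the classification setup: a RoBERTa encoder with a sequence-classification head fine-tuned to tag a text segment with a persuasive-strategy label. The label set below is a made-up placeholder, not the paper's annotation scheme, and `roberta-base` stands in for whichever checkpoint the authors used.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

LABELS = ["emotional_appeal", "whataboutism", "exaggeration", "none"]  # placeholder labels

tok = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=len(LABELS)
)

segment = "They don't want you to know what the media is hiding from you."
inputs = tok(segment, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits  # (1, len(LABELS))
print(LABELS[logits.argmax(dim=-1).item()])  # head is untrained here: fine-tune first
```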
Personalized natural language generation for explainable recommendation plays a key role in justifying why a recommendation might match a user's interests. Existing models usually control the generation process with soft constraints (e.g., aspect planning). While promising, these methods struggle to generate specific information correctly, which prevents the generated explanations from being informative and diverse. In this paper, we propose UCEpic, an explanation generation model that unifies aspect planning and lexical constraints for controllable, personalized generation. Specifically, we first pre-train a non-personalized text generator via a proposed robust insertion process, so that the model can generate sentences containing lexical constraints. We then demonstrate how to incorporate aspect planning and personalized references into the insertion process to obtain personalized explanations. Hence, compared to previous work controlled by soft constraints, UCEpic incorporates specific information from keyphrases, and thereby largely improves the diversity and informativeness of the generated explanations. Extensive experiments on RateBeer and Yelp show that UCEpic can generate high-quality and diverse explanations for recommendations.
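A toy sketch of lexically constrained insertion generation: the sequence is seeded with the keyphrase tokens and grows only by insertions, so the constraints can never be dropped. The proposal function is a stub; UCEpic's actual insertion model, aspect planning, and personalization are beyond this illustration.

```python
def insertion_generate(keyphrase_tokens, propose_insertion, max_steps=20):
    """Grow a sentence around hard lexical constraints: the keyphrase
    tokens seed the sequence, and the model only ever inserts, so the
    constraints cannot be dropped or corrupted."""
    seq = list(keyphrase_tokens)
    for _ in range(max_steps):
        slot, token = propose_insertion(seq)  # model picks (position, token)
        if token is None:                     # model signals "done"
            break
        seq.insert(slot, token)
    return seq

# e.g., seeding with ["hoppy", "citrus"] might grow into
# ["a", "hoppy", "ale", "with", "bright", "citrus", "notes"]
```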
Due to the inevitable noise introduced during scanning and quantization, 3D reconstruction with RGB-D sensors suffers from errors in both geometry and texture, leading to artifacts such as camera drift, mesh distortion, texture ghosting, and blurriness. Given an imperfect reconstructed 3D model, most previous methods have focused on refining either the geometry, the texture, or the camera pose alone; alternatively, previous joint optimization methods have used different optimization schemes and objectives for each component, forming a complicated system. In this paper, we propose a novel optimization approach based on differentiable rendering, which integrates the optimization of camera pose, geometry, and texture into a unified framework by enforcing consistency between the rendered results and the corresponding RGB-D inputs. Based on this unified framework, we introduce a joint optimization approach that fully exploits the inter-relationships among geometry, texture, and camera pose, and we describe an adaptive interleaving strategy to improve optimization stability and efficiency. With differentiable rendering, an image-level adversarial loss is applied to further improve the 3D model, making it more photo-realistic. Experiments on synthetic and real data, using both quantitative and qualitative evaluation, demonstrate our superiority in recovering fine-scale geometry and high-fidelity texture.
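A schematic of the unified objective: pose, geometry, and texture are optimized jointly by rendering differentiably and penalizing disagreement with the RGB-D inputs. The `render` function is a stand-in for any differentiable renderer, and the losses and weights are illustrative; the paper additionally applies an adversarial image-level loss and an adaptive interleaving strategy not shown here.

```python
import torch

def joint_refine(render, verts, tex, poses, rgb_gt, depth_gt,
                 steps=200, lr=1e-3, w_depth=1.0):
    """Jointly refine geometry (verts), texture (tex), and camera poses by
    enforcing consistency between differentiable renderings and RGB-D input."""
    params = [verts.requires_grad_(), tex.requires_grad_(), poses.requires_grad_()]
    opt = torch.optim.Adam(params, lr=lr)
    for _ in range(steps):
        rgb, depth = render(verts, tex, poses)  # any differentiable renderer
        loss = (rgb - rgb_gt).abs().mean() + w_depth * (depth - depth_gt).abs().mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return verts.detach(), tex.detach(), poses.detach()
```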